Finite-Time Minimax Bounds and an Optimal Lyapunov Policy in Queueing Control

Liu, Yujie, Tan, Vincent Y. F., Xu, Yunbei

arXiv.org Artificial Intelligence

We introduce an original minimax framework for finite-time performance analysis in queueing control and propose a surprisingly simple Lyapunov-based scheduling policy with superior finite-time performance. The framework quantitatively characterizes how the expected total queue length scales with key system parameters, including the capacity of the scheduling set and the variability of arrivals and departures across queues. To our knowledge, this provides the first firm foundation for evaluating and comparing scheduling policies in the finite-time regime, including nonstationary settings, and shows that the proposed policy can provably and empirically outperform classical MaxWeight in finite time. Within this framework, we establish three main sets of results. First, we derive minimax lower bounds on the expected total queue length for parallel-queue scheduling via a novel Brownian coupling argument. Second, we propose a new policy, LyapOpt, which minimizes the full quadratic Lyapunov drift, capturing both first- and second-order terms, and achieves optimal finite-time performance in heavy traffic while retaining classical stability guarantees. Third, we identify a key limitation of the classical MaxWeight policy, which optimizes only the first-order drift: its finite-time performance depends suboptimally on system parameters, leading to substantially larger backlogs in explicitly characterized settings. Together, these results delineate the scope and limitations of classical drift-based scheduling and motivate new queueing-control methods with rigorous finite-time guarantees.
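As a rough illustration of the distinction the abstract draws, the sketch below contrasts a MaxWeight-style rule (first-order drift only) with a rule that minimizes the full one-step quadratic Lyapunov drift. The two-queue setup, the candidate schedules, and the drift expression are simplifying assumptions of this illustration, not the paper's exact LyapOpt policy.

```python
import numpy as np

def maxweight(q, schedules):
    # MaxWeight: maximize only the first-order drift term q.s.
    return max(schedules, key=lambda s: np.dot(q, s))

def quadratic_drift_policy(q, schedules, mean_arrivals):
    # LyapOpt-style rule (sketch): minimize the full one-step quadratic
    # Lyapunov drift E[||q + a - s||^2 - ||q||^2], i.e. keep both the
    # first-order term 2 q.(a - s) and the second-order term ||a - s||^2.
    def drift(s):
        delta = mean_arrivals - s
        return 2.0 * np.dot(q, delta) + np.dot(delta, delta)
    return min(schedules, key=drift)

# Two queues, one of which can be served per slot (illustrative setup).
q = np.array([10.0, 1.0])
schedules = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
mean_arrivals = np.array([0.3, 0.3])
```

In this simple instance both rules serve the longer queue; the paper's point is that their behavior diverges in settings where the second-order (variability) term matters.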


FIRE: Semantic Field of Words Represented as Nonlinear Functions

Neural Information Processing Systems

The strength of a word's relation to another word at a certain location is measured as the function value at that location. With FIRE, compositionality is represented via functional additivity, whereas polysemy is represented via the set of points and the function's
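The function-valued view of words described above can be illustrated minimally: each word is a nonlinear function over a semantic space, polysemy corresponds to multiple function peaks, and composition is function addition. The Gaussian-mixture form, the 2-D space, and the example senses are all assumptions of this sketch, not FIRE's actual parameterization.

```python
import numpy as np

def make_word_fn(centers, weights, width=1.0):
    # A word as a function over semantic space: a mixture of Gaussian
    # bumps whose centers play the role of the word's senses.
    centers = np.asarray(centers, dtype=float)
    weights = np.asarray(weights, dtype=float)
    def f(x):
        d2 = np.sum((centers - np.asarray(x, dtype=float)) ** 2, axis=1)
        return float(np.sum(weights * np.exp(-d2 / (2.0 * width ** 2))))
    return f

# Compositionality via functional additivity: the composed function is
# the sum of its component functions (an assumption of this sketch).
bank_river = make_word_fn([[0.0, 0.0]], [1.0])   # hypothetical "riverbank" sense
bank_money = make_word_fn([[5.0, 5.0]], [1.0])   # hypothetical "finance" sense
bank = lambda x: bank_river(x) + bank_money(x)
```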



Composite Safety Potential Field for Highway Driving Risk Assessment

Zuo, Dachuan, Bian, Zilin, Zuo, Fan, Ozbay, Kaan

arXiv.org Artificial Intelligence

In the era of rapid advancements in vehicle safety technologies, driving risk assessment has become a focal point of attention. Technologies such as collision warning systems, advanced driver assistance systems (ADAS), and autonomous driving require driving risks to be evaluated proactively and in real time. To be effective, driving risk assessment metrics must not only accurately identify potential collisions but also exhibit human-like reasoning to enable safe and seamless interactions between vehicles. Existing safety potential field models assess driving risks by considering both objective and subjective safety factors. However, their practical applicability in real-world risk assessment tasks is limited. These models are often challenging to calibrate due to the arbitrary nature of their structures, and calibration can be inefficient because of the scarcity of accident statistics. Additionally, they struggle to generalize across both longitudinal and lateral risks. To address these challenges, we propose a composite safety potential field framework, C-SPF, which combines a subjective field that captures drivers' risk perception of spatial proximity with an objective field that quantifies the imminent collision probability, yielding a comprehensive evaluation of driving risks. The C-SPF is calibrated using abundant two-dimensional spacing data from trajectory datasets, enabling it to effectively capture drivers' proximity risk perception and provide a more realistic explanation of driving behaviors. Analysis of a naturalistic driving dataset demonstrates that the C-SPF can capture both longitudinal and lateral risks that trigger drivers' safety maneuvers. Further case studies highlight the C-SPF's ability to explain lateral driver behaviors, such as abandoning lane changes or adjusting lateral position relative to adjacent vehicles, which are capabilities that existing models fail to achieve.
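To make the two-field structure concrete, here is a heavily simplified sketch: one term for perceived proximity risk and one for imminent collision derived from time-to-collision. The functional forms, the length scales, and the weighted-sum combination are all assumptions of this illustration, not the calibrated C-SPF.

```python
import math

def subjective_field(dx, dy, lx=20.0, ly=2.0):
    # Perceived proximity risk, decaying anisotropically with
    # longitudinal (dx) and lateral (dy) spacing in meters.
    # The exponential form and scales lx, ly are assumptions.
    return math.exp(-abs(dx) / lx - abs(dy) / ly)

def objective_field(dx, closing_speed, horizon=5.0):
    # Imminent-collision term from time-to-collision (TTC).
    if closing_speed <= 0:        # opening gap: no imminent collision
        return 0.0
    ttc = dx / closing_speed
    return max(0.0, 1.0 - ttc / horizon)

def composite_risk(dx, dy, closing_speed, w=0.5):
    # Composite field (illustrative weighted sum of the two components).
    return w * subjective_field(dx, dy) + (1.0 - w) * objective_field(dx, closing_speed)
```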


Recurrent Interpolants for Probabilistic Time Series Prediction

Chen, Yu, Biloš, Marin, Mittal, Sarthak, Deng, Wei, Rasul, Kashif, Schneider, Anderson

arXiv.org Machine Learning

Sequential models such as recurrent neural networks or transformer-based models have become de facto tools for probabilistic multivariate time series forecasting, with applications to a wide range of domains such as finance, biology, and medicine. Despite their adeptness at capturing dependencies, assessing prediction uncertainty, and training efficiently, challenges emerge in modeling high-dimensional complex distributions and cross-feature dependencies. To tackle these issues, recent works delve into generative modeling by employing diffusion or flow-based models. Notably, the integration of stochastic differential equations or probability flow successfully extends these methods to probabilistic time series imputation and forecasting. However, scalability issues necessitate a computationally friendly framework for large-scale generative model-based predictions. This work proposes a novel approach by blending the computational efficiency of recurrent neural networks with the high-quality probabilistic modeling of the diffusion model, which addresses challenges and advances generative models' application in time series forecasting. Our method relies on the foundation of stochastic interpolants and the extension to a broader conditional generation framework with additional control features, offering insights for future developments in this dynamic field.
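The stochastic-interpolant foundation mentioned above admits a compact sketch. The linear interpolant with sqrt(t(1-t)) noise below is one common choice in the interpolant literature, and the recurrent conditioning is only indicated in comments; neither is claimed to match the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def interpolant(x0, x1, t, z):
    # Stochastic interpolant between a base sample x0 and a data sample x1:
    #   x_t = (1 - t) * x0 + t * x1 + sqrt(t (1 - t)) * z.
    # At t = 0 it equals x0 exactly; at t = 1 it equals x1 exactly.
    return (1.0 - t) * x0 + t * x1 + np.sqrt(t * (1.0 - t)) * z

# In a recurrent setting (an assumption of this illustration), x1 would be
# the next observation of the series and the drift/velocity model trained on
# these interpolants would be conditioned on an RNN summary of the past.
x0 = rng.standard_normal(3)           # base (noise) sample
x1 = np.array([1.0, 2.0, 3.0])        # "next step" of the series
z = rng.standard_normal(3)
xt = interpolant(x0, x1, 0.5, z)
```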


Extragradient Type Methods for Riemannian Variational Inequality Problems

Hu, Zihao, Wang, Guanghui, Wang, Xi, Wibisono, Andre, Abernethy, Jacob, Tao, Molei

arXiv.org Artificial Intelligence

Riemannian convex optimization and minimax optimization have recently drawn considerable attention. Their appeal lies in their capacity to adeptly manage the non-convexity of the objective function as well as constraints inherent in the feasible set in the Euclidean sense. In this work, we delve into monotone Riemannian Variational Inequality Problems (RVIPs), which encompass both Riemannian convex optimization and minimax optimization as particular cases. In the context of Euclidean space, it is established that the last iterates of both the extragradient (EG) and past extragradient (PEG) methods converge to the solution of monotone variational inequality problems at a rate of $O\left(\frac{1}{\sqrt{T}}\right)$ (Cai et al., 2022). However, analogous behavior on Riemannian manifolds remains an open question. To bridge this gap, we introduce the Riemannian extragradient (REG) and Riemannian past extragradient (RPEG) methods. We demonstrate that both exhibit $O\left(\frac{1}{\sqrt{T}}\right)$ last-iterate convergence. Additionally, we show that the average-iterate convergence of both REG and RPEG is $O\left(\frac{1}{{T}}\right)$, aligning with observations in the Euclidean case (Mokhtari et al., 2020). These results are enabled by judiciously addressing the holonomy effect so that additional complications in Riemannian cases can be reduced and the Euclidean proof inspired by the performance estimation problem (PEP) technique or the sum-of-squares (SOS) technique can be applied again.
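The Euclidean extragradient update that the abstract builds on fits in a few lines; here it is applied to the classic bilinear saddle problem min_x max_y xy, whose operator F(x, y) = (y, -x) is monotone. The step size and iteration count are arbitrary choices, and the Riemannian variants (REG/RPEG) would replace each update with an exponential-map step, which is not shown.

```python
import numpy as np

def F(v):
    # Monotone VI operator for the saddle problem min_x max_y x*y:
    # F(x, y) = (y, -x).
    x, y = v
    return np.array([y, -x])

def extragradient(v0, eta=0.1, steps=500):
    # Extragradient (EG): first an extrapolation step, then an update
    # that uses the operator evaluated at the extrapolated point.
    v = np.asarray(v0, dtype=float)
    for _ in range(steps):
        w = v - eta * F(v)          # extrapolation
        v = v - eta * F(w)          # update with gradient at w
    return v
```

On this problem plain gradient descent-ascent spirals outward, while EG contracts toward the unique solution (0, 0), which is the behavior the last-iterate rates formalize.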


Microscopy is All You Need

Kalinin, Sergei V., Vasudevan, Rama, Liu, Yongtao, Ghosh, Ayana, Roccapriore, Kevin, Ziatdinov, Maxim

arXiv.org Artificial Intelligence

We posit that microscopy offers an ideal real-world experimental environment for the development and deployment of active Bayesian and reinforcement learning methods. Indeed, the tremendous progress achieved by machine learning (ML) and artificial intelligence over the last decade has been largely achieved via the utilization of static data sets, from the paradigmatic MNIST to the bespoke corpora of text and image data used to train large models such as GPT-3, DALL-E and others. However, it is now recognized that continuous, minute improvements to state-of-the-art do not necessarily translate to advances in real-world applications. We argue that a promising pathway for the development of ML methods is via the route of domain-specific deployable algorithms in areas such as electron and scanning probe microscopy and chemical imaging. This will benefit both fundamental physical studies and serve as a test bed for more complex autonomous systems such as robotics and manufacturing. Favorable environment characteristics of scanning and electron microscopy include low risk, extensive availability of domain-specific priors and rewards, relatively small effects of exogenous variables, and often the presence of both upstream first principles as well as downstream learnable physical models for both statics and dynamics. Recent developments in programmable interfaces, edge computing, and access to APIs facilitating microscope control all render the deployment of ML code on operational microscopes straightforward. We discuss these considerations and hope that these arguments will lead to a novel set of development targets for the ML community by accelerating both real-world ML applications and scientific progress.


Stochastic Modeling of Inhomogeneities in the Aortic Wall and Uncertainty Quantification using a Bayesian Encoder-Decoder Surrogate

Ranftl, Sascha, Rolf-Pissarczyk, Malte, Wolkerstorfer, Gloria, Pepe, Antonio, Egger, Jan, von der Linden, Wolfgang, Holzapfel, Gerhard A.

arXiv.org Artificial Intelligence

Inhomogeneities in the aortic wall can lead to localized stress accumulations, possibly initiating dissection. In many cases, a dissection results from pathological changes such as fragmentation or loss of elastic fibers. However, it has been shown that even the healthy aortic wall has an inherently heterogeneous microstructure. Some parts of the aorta are particularly susceptible to the development of inhomogeneities due to pathological changes; however, their distribution in the aortic wall and their spatial extent, such as size, shape, and type, are difficult to predict. Motivated by this observation, we describe the heterogeneous distribution of elastic fiber degradation in the dissected aortic wall using a stochastic constitutive model. For this purpose, random field realizations, which model the stochastic distribution of degraded elastic fibers, are generated over a non-equidistant grid. The random field then serves as input for a uniaxial extension test of the pathological aortic wall, solved with the finite-element (FE) method. To include the microstructure of the dissected aortic wall, a constitutive model developed in a previous study is applied, which also includes an approach to model the degradation of inter-lamellar elastic fibers. To assess the uncertainty in the output stress distribution due to this stochastic constitutive model, a convolutional neural network, specifically a Bayesian encoder-decoder, is used as a surrogate model that maps the random input fields to the output stress distribution obtained from the FE analysis. The results show that the neural network is able to predict the stress distribution of the FE analysis while significantly reducing the computational time. In addition, it provides the probability of exceeding critical stresses within the aortic wall, which could allow for the prediction of delamination or fatal rupture.
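The first stage of the pipeline above, generating a spatially correlated random field of fiber degradation, can be sketched with a spectral smoothing trick. The Gaussian covariance, the equidistant grid, and the rescaling to [0, 1] are assumptions of this illustration; the paper's construction uses a non-equidistant grid and a specific constitutive coupling.

```python
import numpy as np

def random_degradation_field(shape=(32, 32), corr_len=4.0, seed=0):
    # Generate a spatially correlated Gaussian random field by smoothing
    # white noise with a Gaussian kernel in the Fourier domain, then map
    # it to [0, 1] so it can be read as a degradation fraction.
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    kx = np.fft.fftfreq(shape[0])[:, None]
    ky = np.fft.fftfreq(shape[1])[None, :]
    # Low-pass kernel; corr_len controls the correlation length in pixels.
    kernel = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (kx ** 2 + ky ** 2))
    field = np.real(np.fft.ifft2(np.fft.fft2(noise) * kernel))
    return (field - field.min()) / (field.max() - field.min())
```

A realization of this field would then drive the spatially varying material parameters of the FE model, with the surrogate trained on (field, stress) pairs.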


Scale-covariant and scale-invariant Gaussian derivative networks

Lindeberg, Tony

arXiv.org Artificial Intelligence

This paper presents a hybrid approach between scale-space theory and deep learning, where a deep learning architecture is constructed by coupling parameterized scale-space operations in cascade. By sharing the learnt parameters between multiple scale channels, and by using the transformation properties of the scale-space primitives under scaling transformations, the resulting network becomes provably scale covariant. By additionally performing max pooling over the multiple scale channels, a resulting network architecture for image classification also becomes provably scale invariant. We investigate the performance of such networks on the MNISTLargeScale dataset, which contains images from the original MNIST rescaled by factors of up to 4 in the training data and up to 16 in the testing data. It is demonstrated that the resulting approach allows for scale generalization, enabling good performance for classifying patterns at scales not present in the training data.
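The mechanism described above (shared scale-space primitives across scale channels plus max pooling over scales) can be illustrated with a one-dimensional toy. The scale-normalized first-order Gaussian derivative and the specific scale grid below are illustrative choices, not the paper's learned architecture.

```python
import numpy as np

def gauss_deriv_response(signal, sigma):
    # Scale-normalized first-order Gaussian derivative response,
    # sigma * (signal convolved with dG/dx): a scale-space primitive.
    n = len(signal)
    x = np.arange(n) - n // 2
    g = np.exp(-x ** 2 / (2.0 * sigma ** 2)) / (np.sqrt(2.0 * np.pi) * sigma)
    dg = -x / sigma ** 2 * g
    return sigma * np.convolve(signal, dg, mode="same")

def scale_invariant_feature(signal, sigmas=(1, 2, 4, 8, 16)):
    # The same operator ("shared weights") runs in every scale channel;
    # max pooling over channels makes the output approximately
    # independent of the pattern's scale.
    return max(np.max(np.abs(gauss_deriv_response(signal, s))) for s in sigmas)

# A sharp step edge and a blurred copy give nearly the same pooled feature.
n = 256
x = np.arange(n) - n // 2
edge = (x > 0).astype(float)
g4 = np.exp(-x ** 2 / 32.0)                     # Gaussian blur, sigma = 4
blurred = np.convolve(edge, g4 / g4.sum(), mode="same")
```

For an ideal step edge the scale-normalized response peaks at 1/sqrt(2*pi) in every channel, which is why max pooling over scales recovers nearly the same feature for the blurred copy.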


Efficiently Approximating the Pareto Frontier: Hydropower Dam Placement in the Amazon Basin

Wu, Xiaojian (Cornell University) | Gomes-Selman, Jonathan (Stanford University) | Shi, Qinru (Cornell University) | Xue, Yexiang (Cornell University) | Garcia-Villacorta, Roosevelt (Cornell University) | Anderson, Elizabeth (Florida International University) | Sethi, Suresh (U.S. Geological Survey, New York Cooperative Fish and Wildlife Unit, Cornell University) | Steinschneider, Scott (Cornell University) | Flecker, Alexander (Cornell University) | Gomes, Carla (Cornell University)

AAAI Conferences

Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small epsilon > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/epsilon. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
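The core of an epsilon-approximation scheme like the one described above is a pruning step that keeps the frontier polynomially succinct. The sketch below shows one DP merge for a bi-objective maximization problem; the geometric rounding grid and the tie-breaking rule are illustrative simplifications of the paper's scheme.

```python
import math
from itertools import product

def eps_prune(points, eps):
    # Keep one representative per cell of a geometric grid: rounding each
    # objective value down to a power of (1 + eps) bounds the number of
    # stored points polynomially while losing at most a (1 + eps) factor.
    kept = {}
    for p in points:
        key = tuple(0 if v <= 0 else math.floor(math.log(v, 1 + eps)) for v in p)
        if key not in kept or sum(p) > sum(kept[key]):
            kept[key] = p
    return list(kept.values())

def merge(frontier_a, frontier_b, eps):
    # One DP step on a tree-structured network (sketch): a parent's
    # approximate frontier combines its subtrees' frontiers by pairwise
    # addition of objective vectors, followed by epsilon-pruning.
    combined = [tuple(xa + xb for xa, xb in zip(pa, pb))
                for pa, pb in product(frontier_a, frontier_b)]
    return eps_prune(combined, eps)
```

Because pruning happens after every merge, the stored set never grows beyond polynomial size in the instance and 1/epsilon, which is what makes the overall scheme an FPTAS.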